91.
Using 2006–2016 water-quality and influencing-factor data for the Yichang and Zhutuo cross-sections, we compute the magnitude of water-quality change and, via the impulse response functions and variance decomposition of a VAR model, obtain the duration and contribution rate of each factor's influence on water-quality change at the inflow and outflow sections of the Three Gorges Reservoir. The results show that the magnitude of water-quality change from the Zhutuo section to the Yichang section fell from 0.07 in 2006 to -0.24 in 2016, an improving annual trend. The composite water-quality index responds most strongly, and for the longest duration, to industrial wastewater discharge; its responses to pesticide and fertilizer application (pure amounts) are very small and short-lived, although their indirect influence persists much longer. Over the study period, the contribution rates of the main drivers in descending order were: industrial wastewater discharge 38.10%, urban domestic sewage discharge 33.78%, ship oily wastewater 24.17%, fertilizer application 2.61%, and pesticide application 1.33%. Different pollutant indices have different drivers: dissolved oxygen is driven mainly by ship oily wastewater, the permanganate index mainly by industrial wastewater discharge, and ammonia nitrogen mainly by urban domestic sewage discharge.
92.
Based on 2003–2016 data on industrial restructuring and the environment in China's resource-depleted cities, we measure the coupling coordination degree between industrial restructuring and reductions in the three industrial wastes, and then apply a panel VAR model to quantify the dynamic interaction between industrial restructuring and pollutant emissions. The results show that the coupling coordination between industrial restructuring and pollution abatement in China's resource-depleted cities is still on the verge of dissonance. The impulse-response plots show that upgrading of the industrial structure has a long-run effect in reducing industrial wastewater and industrial sulfur dioxide emissions, while wastewater abatement in turn helps force industrial upgrading. For the transformation and development of China's resource-depleted cities, industrial restructuring should be pushed further: vigorously developing the tertiary and high-tech industries, accelerating the optimization and upgrading of the economic structure, and using industrial upgrading to drive green, high-quality urban development.
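The coupling coordination degree used in studies like this is commonly computed from two normalized subsystem scores. A minimal sketch of one standard variant follows; the equal weights and this exact formula are assumptions, not necessarily the paper's specification:

```python
import math

def coupling_coordination(u1, u2, a=0.5, b=0.5):
    """Two-system coupling coordination degree D = sqrt(C * T).

    u1, u2: normalized development scores in (0, 1] for the two subsystems
    (e.g. industrial restructuring and pollution abatement).
    a, b: weights of the two subsystems in the coordination index.
    """
    C = 2 * math.sqrt(u1 * u2) / (u1 + u2)  # coupling degree
    T = a * u1 + b * u2                      # comprehensive coordination index
    return math.sqrt(C * T)
```

In this family of indices, values around 0.4–0.5 are often labeled "on the verge of dissonance", which is the regime the abstract reports.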
93.
To address the problem that the information in each modality is not fully exploited in current user-profiling work, this paper proposes a cross-modal learning approach and designs a user-profiling model based on multimodal fusion. First, a Stacking ensemble is used to fuse several cross-modal joint-representation networks and learn the corresponding model combinations; an attention mechanism is then introduced so that the model can learn how much each modality's representation contributes to the prediction. The improved model, with a carefully designed network structure and objective function, produces a joint feature representation that combines feature-level and decision-level fusion, allowing related features from different modalities to be merged. Experiments on real data sets show that the proposed model outperforms the current best baselines.
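The Stacking-ensemble idea can be sketched with scikit-learn on synthetic data. The base learners here are generic stand-ins; the actual model fuses cross-modal representation networks, which are not reproduced:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for fused multimodal user features
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    # The meta-learner combines base-model predictions: decision-level fusion
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X, y)
acc = stack.score(X, y)
```

The meta-learner is trained on cross-validated base-model predictions, which is what lets the ensemble weigh each component's contribution, analogous to the attention weighting the abstract describes.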
94.
Many researchers have analyzed the clustering of terrorist incidents using the Global Terrorism Database (GTD) with game theory, k-nearest neighbors, support vector machines, and other methods, with some success. However, earlier work did not adequately address the data sparsity and the high dimensionality and redundancy that reduce cluster-classification accuracy. This paper proposes a TFM classification model that combines minimum-redundancy maximum-relevance (mRMR) feature selection with factorization machines (FM): an incremental search finds a near-optimal feature subset to handle high-dimensional redundancy, the FM component handles data sparsity, and the preprocessed terrorist-incident data are then quantitatively classified with the TFM model. Comparing the Matthews correlation coefficient (MCC) across naive Bayes (NB), SVM, logistic regression (LR), and TFM, the MCC of TFM improves on NB, SVM, and LR by 49.9%, 2.5%, and 2.3% respectively, indicating that the TFM model is feasible.
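The incremental mRMR search can be sketched as a greedy loop that maximizes relevance minus mean redundancy. This sketch uses absolute correlation as a proxy for both quantities on synthetic data (mutual information is the more common choice); the FM component the paper pairs it with is omitted:

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy incremental mRMR: at each step add the feature that maximizes
    relevance to the target minus mean redundancy with already-selected features."""
    n = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n)])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        best_j, best_score = -1, -np.inf
        for j in range(n):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = rel[j] - red
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Demo: feature 2 duplicates feature 0, feature 3 is pure noise;
# mRMR should keep one of {0, 2}, keep 1, and drop 3.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))
X[:, 2] = X[:, 0] + 0.01 * rng.standard_normal(300)
y = X[:, 0] + X[:, 1]
sel = mrmr_select(X, y, 2)
```

The redundancy penalty is what prevents the duplicated feature from being selected twice, which is exactly the "high-dimensional, redundant" problem the abstract targets.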
95.
Projections of future climate change cannot rely on a single model. It has become common to rely on multiple simulations generated by Multi-Model Ensembles (MMEs), especially to quantify the uncertainty about what would constitute an adequate model structure. But, as Parker (2018) points out, one of the remaining philosophically interesting questions is: “How can ensemble studies be designed so that they probe uncertainty in desired ways?” This paper offers two interpretations of what General Circulation Models (GCMs) are and how MMEs made of GCMs should be designed. In the first interpretation, models are combinations of modules and parameterisations; an MME is obtained by “plugging and playing” with interchangeable modules and parameterisations. In the second interpretation, models are aggregations of expert judgements that result from a history of epistemic decisions made by scientists about the choice of representations; an MME is a sampling of expert judgements from modelling teams. We argue that, while the two interpretations involve distinct domains from philosophy of science and social epistemology, they could both be used in a complementary manner to explore ways of designing better MMEs.
96.
In this paper, we assess the predictive content of latent economic policy uncertainty and data surprise factors for forecasting and nowcasting gross domestic product (GDP) using factor-type econometric models. Our analysis focuses on five emerging market economies: Brazil, Indonesia, Mexico, South Africa, and Turkey; and we carry out a forecasting horse race in which predictions from various models are compared. These models may (or may not) contain latent uncertainty and surprise factors constructed using both local and global economic datasets. The set of models that we examine includes both simple benchmark linear econometric models and dynamic factor models estimated using a variety of frequentist and Bayesian data shrinkage methods based on the least absolute shrinkage and selection operator (LASSO). We find that the inclusion of our new uncertainty and surprise factors leads to superior predictions of GDP growth, particularly when these latent factors are constructed using Bayesian variants of the LASSO. Overall, our findings point to the importance of spillover effects from global uncertainty and data surprises when predicting GDP growth in emerging market economies.
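LASSO-based shrinkage of the kind used to screen predictors can be sketched with scikit-learn on synthetic data. This is an illustrative frequentist sketch; the Bayesian LASSO variants the paper favors are not shown, and the data are stand-ins:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))   # stand-in panel of candidate predictors
beta = np.zeros(20)
beta[:3] = [1.5, -2.0, 1.0]          # only three predictors actually matter
y = X @ beta + 0.1 * rng.standard_normal(200)

model = Lasso(alpha=0.1).fit(X, y)
active = np.flatnonzero(model.coef_)  # LASSO zeroes out irrelevant predictors
```

The L1 penalty shrinks irrelevant coefficients exactly to zero, which is what makes the estimated factor models parsimonious when the candidate predictor set is large.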
97.
This paper presents a new spatial dependence model with an adjustment for feature differences. The model accounts for spatial autocorrelation in both the outcome variables and the residuals. The feature-difference adjustment emphasizes feature changes across neighboring units while suppressing unobserved covariates shared within the same neighborhood. The prediction at a given unit incorporates components that depend on the differences between the values of its main features and those of its neighboring units. In contrast to conventional spatial regression models, our model does not require a comprehensive list of global covariates to estimate the outcome variable at a unit, as common macro-level covariates are differenced away in the regression analysis. Using real-estate market data from Hong Kong, we applied Gibbs sampling to determine the posterior distribution of each model parameter. Our empirical analysis confirms that the feature-difference adjustment, combined with spatial error autocorrelation, produces better out-of-sample prediction performance than other conventional spatial dependence models. In addition, our empirical analysis can identify the components with the most significant contributions.
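The feature-difference adjustment can be sketched as differencing each unit's features against its neighborhood mean under a row-standardized spatial weight matrix. This is a generic construction assumed for illustration, not the paper's exact specification:

```python
import numpy as np

def feature_difference(X, W):
    """Difference each unit's features from its neighborhood average.

    X: (n, p) feature matrix; W: (n, n) row-standardized spatial weights.
    Any covariate shared by a whole neighborhood cancels out of X - W @ X.
    """
    return X - W @ X

# 3 units on a line: unit 1 neighbors {0, 2}; units 0 and 2 neighbor {1}
W = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
X = np.array([[1.0], [2.0], [3.0]])
D = feature_difference(X, W)  # differences from neighborhood means
```

Note that a constant (macro-level) column maps to zero under this transform, which is why a comprehensive list of global covariates is unnecessary: they are differenced away.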
98.
We consider finite state-space non-homogeneous hidden Markov models for forecasting univariate time series. Given a set of predictors, the time series are modeled via predictive regressions with state-dependent coefficients and time-varying transition probabilities that depend on the predictors via a logistic/multinomial function. In a hidden Markov setting, inference for logistic regression coefficients becomes complicated and in some cases impossible due to convergence issues. In this paper, we aim to address this problem utilizing the recently proposed Pólya-Gamma latent variable scheme. Also, we allow for model uncertainty regarding the predictors that affect the series both linearly — in the mean — and non-linearly — in the transition matrix. Predictor selection and inference on the model parameters are based on an automatic Markov chain Monte Carlo scheme with reversible jump steps. Hence the proposed methodology can be used as a black box for predicting time series. Using simulation experiments, we illustrate the performance of our algorithm in various setups, in terms of mixing properties, model selection and predictive ability. An empirical study on realized volatility data shows that our methodology gives improved forecasts compared to benchmark models.
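A predictor-driven transition matrix with a multinomial-logit link can be sketched as follows. This is a generic construction for illustration; the Pólya-Gamma augmentation the paper uses for inference on the coefficients is not shown:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())  # subtract max for numerical stability
    return e / e.sum()

def transition_matrix(z_t, Gamma):
    """Build a time-varying K x K transition matrix at time t.

    z_t: (d,) predictor vector at time t.
    Gamma: (K, K, d) multinomial-logit coefficients; row k of the result
    gives p(S_t = j | S_{t-1} = k, z_t) for j = 1..K.
    """
    K = Gamma.shape[0]
    return np.vstack([softmax(Gamma[k] @ z_t) for k in range(K)])

rng = np.random.default_rng(2)
K, d = 3, 2
P = transition_matrix(rng.standard_normal(d), rng.standard_normal((K, K, d)))
```

Because each row passes through a softmax, every row is a valid probability distribution regardless of the predictor values, which is what lets the predictors enter the transition matrix "non-linearly" as the abstract describes.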
99.
The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that exploit volatility persistence to emphasise certain losses within the combination estimation period. A comprehensive empirical analysis of out-of-sample forecast performance across varying dimensions, loss functions, sub-samples and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher than the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.
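One simple weighting scheme in this spirit exponentially down-weights older losses (so that weights track persistent volatility shifts) and sets combination weights inversely proportional to each model's discounted MSE. This is an illustrative construction, not the paper's exact schemes:

```python
import numpy as np

def discounted_inverse_mse_weights(errors, decay=0.9):
    """Combination weights from exponentially discounted squared errors.

    errors: (T, M) forecast errors of M candidate models over T periods.
    decay: discount factor in (0, 1); recent losses count more, so the
    weights adapt when volatility (and hence relative accuracy) persists.
    """
    T = errors.shape[0]
    w_time = decay ** np.arange(T - 1, -1, -1)   # most recent period weighted 1
    mse = (w_time[:, None] * errors**2).sum(axis=0) / w_time.sum()
    inv = 1.0 / mse
    return inv / inv.sum()

# Model 0 has consistently smaller errors, so it receives the larger weight
errors = np.array([[0.1, 0.5],
                   [0.2, 0.4],
                   [0.1, 0.6]])
w = discounted_inverse_mse_weights(errors)
```

Averaging candidate forecasts with weights like these is also what drives the reduction in the variability of the resulting portfolio weight or beta noted in the abstract.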
100.
We use dynamic factors and neural network models to identify current and past states (rather than future states) of the US business cycle. In the first step, we reduce noise in the data using a moving average filter. Dynamic factors are then extracted from a large-scale data set consisting of more than 100 variables. In the last step, these dynamic factors are fed into the neural network model to predict business cycle regimes. We show that our proposed method follows US business cycle regimes quite accurately in-sample and out-of-sample without taking historical data availability into account. Our results also indicate that noise reduction is an important step in business cycle prediction. Furthermore, using pseudo real-time and vintage data, we show that our neural network model identifies turning points accurately and very quickly in real time.
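The noise-reduction first step can be sketched with a simple equal-weight moving-average filter; the window length here is an assumption for illustration:

```python
import numpy as np

def moving_average(x, window=3):
    """Smooth a series with an equal-weight moving average (trims the edges)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

smoothed = moving_average(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), window=3)
```

The smoothed series (here shortened by `window - 1` points at the edges) would then feed the factor-extraction step before the neural network classifier.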
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号